Results 1 - 20 of 70
1.
Surg Endosc ; 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38653899

ABSTRACT

BACKGROUND: The learning curve in minimally invasive surgery (MIS) is longer than in open surgery. It has been reported that structured feedback and training in teams of two trainees improve MIS training and MIS performance. Annotation of surgical images and videos may prove beneficial for surgical training. This study investigated whether structured feedback and video debriefing, including annotation of the critical view of safety (CVS), have beneficial learning effects in a predefined, multi-modal MIS training curriculum in teams of two trainees. METHODS: This randomized-controlled single-center study included medical students without MIS experience (n = 80). The participants first completed a standardized and structured multi-modal MIS training curriculum. They were then randomly divided into two groups (n = 40 each), and each team performed four laparoscopic cholecystectomies (LCs) on ex-vivo porcine livers. Students in the intervention group received structured feedback after each LC, consisting of LC performance evaluations through tutor-trainee joint video debriefing and CVS video annotation. Performance was evaluated using global and LC-specific Objective Structured Assessment of Technical Skills (OSATS) and Global Operative Assessment of Laparoscopic Skills (GOALS) scores. RESULTS: The participants in the intervention group had higher global and LC-specific OSATS as well as global and LC-specific GOALS scores than the participants in the control group (25.5 ± 7.3 vs. 23.4 ± 5.1, p = 0.003; 47.6 ± 12.9 vs. 36 ± 12.8, p < 0.001; 17.5 ± 4.4 vs. 16 ± 3.8, p < 0.001; 6.6 ± 2.3 vs. 5.9 ± 2.1, p = 0.005). The intervention group achieved CVS more often than the control group (1st LC: 20 vs. 10 participants, p = 0.037; 2nd LC: 24 vs. 8, p = 0.001; 3rd LC: 31 vs. 8, p < 0.001; 4th LC: 31 vs. 10, p < 0.001).
CONCLUSIONS: Structured feedback and video debriefing with CVS annotation improve CVS achievement and ex-vivo porcine LC training performance as measured by OSATS and GOALS scores.

3.
Endoscopy ; 56(2): 131-150, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38040025

ABSTRACT

This ESGE Position Statement provides structured and evidence-based guidance on the essential requirements and processes involved in training in basic gastrointestinal (GI) endoscopic procedures. The document outlines definitions; required competencies and the means for their assessment and maintenance; the structure and requirements of training programs; and patient safety and medicolegal issues. 1: ESGE and ESGENA define basic endoscopic procedures as those procedures that are commonly indicated, generally accessible, and expected to be mastered (technically and cognitively) by the end of any core training program in gastrointestinal endoscopy. 2: ESGE and ESGENA consider the following as basic endoscopic procedures: diagnostic upper and lower GI endoscopy, as well as a limited range of interventions such as: tissue acquisition via cold biopsy forceps, polypectomy for lesions ≤ 10 mm, hemostasis techniques, enteral feeding tube placement, foreign body retrieval, dilation of simple esophageal strictures, and India ink tattooing of lesion location. 3: ESGE and ESGENA recommend that training in GI endoscopy should be subject to stringent formal requirements that ensure all ESGE key performance indicators (KPIs) are met. 4: Training in basic endoscopic procedures is a complex process and includes the development and acquisition of cognitive, technical/motor, and integrative skills. Therefore, ESGE and ESGENA recommend the use of validated tools to track the development of skills and assess competence. 5: ESGE and ESGENA recommend incorporating a multimodal approach to evaluating competence in basic GI endoscopic procedures, including procedural thresholds and the measurement and documentation of established ESGE KPIs. 7: ESGE and ESGENA recommend the continuous monitoring of ESGE KPIs during GI endoscopy training to ensure the trainee's maintenance of competence.
9: ESGE and ESGENA recommend that GI endoscopy training units fulfil the ESGE KPIs for endoscopy units and, furthermore, be capable of providing the dedicated personnel, infrastructure, and sufficient case volume required for successful training within a structured training program. 10: ESGE and ESGENA recommend that trainers in basic GI endoscopic procedures should be endoscopists with formal educational training in the teaching of endoscopy, which allows them to successfully and safely teach trainees.


Subjects
Gastroenterology, Humans, Gastrointestinal Endoscopy/methods, Gastrointestinal Endoscopes, Medical Societies
4.
IEEE Trans Med Imaging ; 43(3): 1247-1258, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37971921

ABSTRACT

Assessing the critical view of safety (CVS) in laparoscopic cholecystectomy requires accurate identification and localization of key anatomical structures, reasoning about their geometric relationships to one another, and determining the quality of their exposure. Prior works have approached this task by including semantic segmentation as an intermediate step, using predicted segmentation masks to then predict the CVS. While these methods are effective, they rely on extremely expensive ground-truth segmentation annotations and tend to fail when the predicted segmentation is incorrect, limiting generalization. In this work, we propose a method for CVS prediction wherein we first represent a surgical image using a disentangled latent scene graph, then process this representation using a graph neural network. Our graph representations explicitly encode semantic information (object location, class information, geometric relations) to improve anatomy-driven reasoning, as well as visual features to retain differentiability and thereby provide robustness to semantic errors. Finally, to address annotation cost, we propose to train our method using only bounding box annotations, incorporating an auxiliary image reconstruction objective to learn fine-grained object boundaries. We show that our method not only outperforms several baseline methods when trained with bounding box annotations, but also scales effectively when trained with segmentation masks, maintaining state-of-the-art performance.


Subjects
Computer-Assisted Image Processing, Computer Neural Networks, Semantics
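The graph-neural-network processing described in the abstract above can be illustrated with a single generic message-passing step. This is a toy mean-aggregation layer with made-up node features and weights, not the paper's actual architecture:

```python
import numpy as np

def gnn_layer(node_feats, adj, weight):
    """One graph-convolution step: average neighbour features through the
    row-normalised adjacency matrix, then apply a linear map and ReLU."""
    deg = adj.sum(axis=1, keepdims=True)
    agg = (adj / np.maximum(deg, 1)) @ node_feats   # mean over neighbours
    return np.maximum(agg @ weight, 0.0)            # linear + ReLU

# Hypothetical scene graph: 3 anatomical "nodes" with 4-dim features.
adj = np.array([[1, 1, 0],
                [1, 1, 1],
                [0, 1, 1]], dtype=float)  # self-loops included
feats = np.eye(3, 4)                      # one-hot stand-in features
w = np.ones((4, 2))                       # toy weight matrix
out = gnn_layer(feats, adj, w)            # shape (3, 2)
```

Stacking several such layers lets information about one structure's location and class propagate to its neighbours, which is the kind of anatomy-driven reasoning the abstract refers to.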
5.
Br J Surg ; 111(1)2024 Jan 03.
Article in English | MEDLINE | ID: mdl-37935636

ABSTRACT

The growing availability of surgical digital data and developments in analytics such as artificial intelligence (AI) are being harnessed to improve surgical care. However, technical and cultural barriers to real-time intraoperative AI assistance exist. This early-stage clinical evaluation shows the technical feasibility of concurrently deploying several AIs in operating rooms for real-time assistance during procedures. In addition, potentially relevant clinical applications of these AI models are explored with a multidisciplinary cohort of key stakeholders.


Subjects
Laparoscopic Cholecystectomy, Humans, Artificial Intelligence
6.
Surg Endosc ; 38(1): 229-239, 2024 01.
Article in English | MEDLINE | ID: mdl-37973639

ABSTRACT

BACKGROUND: The large amount of heterogeneous data collected in surgical/endoscopic practice calls for data-driven approaches such as machine learning (ML) models. The aim of this study was to develop ML models to predict endoscopic sleeve gastroplasty (ESG) efficacy at 12 months, defined by total weight loss (TWL) % and excess weight loss (EWL) % achievement. Multicentre data were used to enhance generalizability, evaluate consistency among different centers of ESG practice, and assess the reproducibility of the models and their possible clinical application. Models were designed to be dynamic and to integrate follow-up clinical data into more accurate predictions, possibly assisting management and decision-making. METHODS: ML models were developed using data from 404 ESG procedures performed at 12 centers across Europe. Collected data included clinical and demographic variables at the time of ESG and at follow-up. Multicentre/external as well as single-center/internal and temporal validation were performed. Training and evaluation of the models were performed with Python's scikit-learn library. Performance of the models was quantified as receiver operating characteristic area under the curve (ROC-AUC), sensitivity, specificity, and calibration plots. RESULTS: Multicenter external validation: ML models using preoperative data only showed poor performance. Best performances were reached by linear regression (LR) and support vector machine models for TWL% and EWL%, respectively (ROC-AUC: TWL% 0.87, EWL% 0.86), with the addition of 6-month follow-up data. Single-center internal validation: ML models using preoperative data only showed suboptimal performance. Addition of early, i.e., 3-month, follow-up data led to ROC-AUCs of 0.79 (random forest classifier model) and 0.81 (LR model) for TWL% and EWL% achievement prediction, respectively. Single-center temporal validation showed similar results.
CONCLUSIONS: Although preoperative data alone may not be sufficient for accurate postoperative predictions, the ability of ML models to adapt and evolve as patients change could assist in providing effective and personalized postoperative care. The improvement of the models' predictive capacity with follow-up data is encouraging, and such models may become a valuable support in patient management and decision-making.


Subjects
Gastroplasty, Morbid Obesity, Humans, Gastroplasty/methods, Obesity/surgery, Reproducibility of Results, Treatment Outcome, Weight Loss, Machine Learning, Morbid Obesity/surgery
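The ROC-AUC metric reported in the study above can be computed directly from predicted scores and binary outcomes. A minimal pure-NumPy sketch with hypothetical predictions follows (the study itself used scikit-learn's implementations); it uses the Mann-Whitney formulation, i.e. the probability that a randomly chosen positive case is scored above a randomly chosen negative one:

```python
import numpy as np

def roc_auc(y_true, y_score):
    """ROC-AUC via pairwise comparisons of positive vs. negative scores;
    tied scores count as half a correct ordering."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical predictions for TWL% target achievement (1 = target reached).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.9, 0.2, 0.8, 0.35, 0.4, 0.3, 0.7, 0.55]
auc = roc_auc(y_true, y_score)  # 14 of 16 pos/neg pairs correctly ordered
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, which is the scale on which the reported 0.79-0.87 values sit.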
7.
Surg Endosc ; 38(2): 488-498, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38148401

ABSTRACT

BACKGROUND: Minimally invasive total gastrectomy (MITG) is a mainstay of curative treatment for patients with gastric cancer. To define and standardize optimal surgical techniques and further improve clinical outcomes through enhanced MITG surgical quality, there must be consensus on the key technical steps of lymphadenectomy and anastomosis creation, which is currently lacking. This study aimed to determine expert consensus from an international panel regarding the technical aspects of the performance of MITG for oncological indications using the Delphi method. METHODS: A 100-point scoping survey was created based on the deconstruction of MITG into its key technical steps through local and international expert opinion and literature evidence. An international expert panel comprising upper gastrointestinal and general surgeons participated in multiple rounds of a Delphi consensus. The panelists voted on issues concerning importance, difficulty, or agreement using an online questionnaire. An a priori consensus standard was set at > 80% agreement with a statement. Internal consistency and reliability were evaluated using Cronbach's α. RESULTS: Thirty expert upper gastrointestinal and general surgeons participated in three online Delphi rounds, generating a final consensus of 41 statements regarding MITG for gastric cancer. Consensus was gained on 22, 12, and 7 questions in Delphi rounds 1, 2, and 3, respectively, which were rephrased into the 41 statements. For lymphadenectomy and aspects of anastomosis creation, Cronbach's α regarding difficulty or importance was 0.896 and 0.886 in round 1, and 0.848 and 0.779 in round 2. CONCLUSIONS: The Delphi consensus defined 41 steps as crucial for performing a high-quality MITG for oncological indications based on the standards of an international panel.
The results of this consensus provide a platform for creating and validating surgical quality assessment tools designed to improve clinical outcomes and standardize surgical quality in MITG.


Subjects
Stomach Neoplasms, Humans, Delphi Technique, Consensus, Stomach Neoplasms/surgery, Reproducibility of Results, Lymph Node Excision, Surgical Anastomosis, Gastrectomy
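The internal-consistency statistic used in the consensus above, Cronbach's α, has a simple closed form: α = k/(k-1) · (1 - Σ item variances / variance of totals), for k items. A sketch with hypothetical panel ratings (not the study's data):

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a (raters x items) matrix of scores."""
    k = ratings.shape[1]                          # number of items
    item_vars = ratings.var(axis=0, ddof=1)       # per-item sample variance
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of rater totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical panel: 5 raters scoring 4 survey items on a 1-5 scale.
scores = np.array([
    [4, 5, 4, 5],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 4, 4],
])
alpha = cronbach_alpha(scores)
```

Values around 0.8-0.9, as reported in the abstract, indicate that panelists rated the items consistently.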
8.
Surg Endosc ; 37(10): 7412-7424, 2023 10.
Article in English | MEDLINE | ID: mdl-37584774

ABSTRACT

BACKGROUND: Technical skill assessment in surgery relies on expert opinion. Therefore, it is time-consuming, costly, and often lacks objectivity. Analysis of intraoperative data by artificial intelligence (AI) has the potential for automated technical skill assessment. The aim of this systematic review was to analyze the performance, external validity, and generalizability of AI models for technical skill assessment in minimally invasive surgery. METHODS: A systematic search of Medline, Embase, Web of Science, and IEEE Xplore was performed to identify original articles reporting the use of AI in the assessment of technical skill in minimally invasive surgery. Risk of bias (RoB) and quality of the included studies were analyzed according to Quality Assessment of Diagnostic Accuracy Studies criteria and the modified Joanna Briggs Institute checklists, respectively. Findings were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. RESULTS: In total, 1958 articles were identified; 50 articles met eligibility criteria and were analyzed. Motion data extracted from surgical videos (n = 25) or kinematic data from robotic systems or sensors (n = 22) were the most frequent input data for AI. Most studies used deep learning (n = 34) and predicted technical skills using an ordinal assessment scale (n = 36) with good accuracies in simulated settings. However, all proposed models were in the development stage, only 4 studies were externally validated, and 8 showed a low RoB. CONCLUSION: AI showed good performance in technical skill assessment in minimally invasive surgery. However, models often lacked external validity and generalizability. Therefore, models should be benchmarked using predefined performance metrics and tested in clinical implementation studies.


Subjects
Artificial Intelligence, Minimally Invasive Surgical Procedures, Humans, Academies and Institutes, Benchmarking, Checklist
9.
Med Image Anal ; 89: 102888, 2023 10.
Article in English | MEDLINE | ID: mdl-37451133

ABSTRACT

Formalizing surgical activities as triplets of the used instruments, actions performed, and target anatomies is becoming a gold standard approach for surgical activity modeling. The benefit is that this formalization helps to obtain a more detailed understanding of tool-tissue interaction, which can be used to develop better Artificial Intelligence assistance for image-guided surgery. Earlier efforts and the CholecTriplet challenge introduced in 2021 have put together techniques aimed at recognizing these triplets from surgical footage. Also estimating the spatial locations of the triplets would offer more precise intraoperative context-aware decision support for computer-assisted intervention. This paper presents the CholecTriplet2022 challenge, which extends surgical action triplet modeling from recognition to detection. It includes weakly supervised bounding-box localization of all visible surgical instruments (or tools), as the key actors, and the modeling of each tool's activity in the form of a triplet. The paper describes a baseline method and 10 new deep learning algorithms presented at the challenge to solve the task. It also provides thorough methodological comparisons of the methods; an in-depth analysis of the obtained results across multiple metrics and across visual and procedural challenges; their significance; and useful insights for future research directions and applications in surgery.


Subjects
Artificial Intelligence, Computer-Assisted Surgery, Humans, Endoscopy, Algorithms, Computer-Assisted Surgery/methods, Surgical Instruments
10.
Sci Rep ; 13(1): 9235, 2023 06 07.
Article in English | MEDLINE | ID: mdl-37286660

ABSTRACT

Surgical video analysis facilitates education and research. However, video recordings of endoscopic surgeries can contain privacy-sensitive information, especially if the endoscopic camera is moved out of the body of patients and out-of-body scenes are recorded. Therefore, identification of out-of-body scenes in endoscopic videos is of major importance to preserve the privacy of patients and operating room staff. This study developed and validated a deep learning model for the identification of out-of-body images in endoscopic videos. The model was trained and evaluated on an internal dataset of 12 different types of laparoscopic and robotic surgeries and was externally validated on two independent multicentric test datasets of laparoscopic gastric bypass and cholecystectomy surgeries. Model performance was evaluated against human ground-truth annotations using the receiver operating characteristic area under the curve (ROC AUC). The internal dataset consisting of 356,267 images from 48 videos and the two multicentric test datasets consisting of 54,385 and 58,349 images from 10 and 20 videos, respectively, were annotated. The model identified out-of-body images with 99.97% ROC AUC on the internal test dataset. Mean ± standard deviation ROC AUC was 99.94 ± 0.07% on the multicentric gastric bypass dataset and 99.71 ± 0.40% on the multicentric cholecystectomy dataset. The model can reliably identify out-of-body images in endoscopic videos and is publicly shared. This facilitates privacy preservation in surgical video analysis.


Subjects
Deep Learning, Laparoscopy, Humans, Privacy, Video Recording, Cholecystectomy
11.
Med Image Anal ; 88: 102866, 2023 08.
Article in English | MEDLINE | ID: mdl-37356320

ABSTRACT

Searching through large volumes of medical data to retrieve relevant information is a challenging yet crucial task for clinical care. However, the primitive and most common approach to retrieval, involving text in the form of keywords, is severely limited when dealing with complex media formats. Content-based retrieval offers a way to overcome this limitation by using rich media as the query itself. Surgical video-to-video retrieval in particular is a new and largely unexplored research problem with high clinical value, especially in the real-time case: using real-time video hashing, search can be achieved directly inside the operating room. Indeed, the process of hashing converts large data entries into compact binary arrays, or hashes, enabling large-scale search operations at a very fast rate. However, due to fluctuations over the course of a video, not all bits in a given hash are equally reliable. In this work, we propose a method capable of mitigating this uncertainty while maintaining a light computational footprint. We present superior retrieval results (3%-4% higher top-10 mean average precision) on a multi-task evaluation protocol for surgery, using cholecystectomy phases, bypass phases, and, from an entirely new dataset introduced here, surgical events across six different surgery types. Success on this multi-task benchmark shows the generalizability of our approach for surgical video retrieval.


Subjects
Algorithms, Laparoscopy, Humans, Cholecystectomy, Uncertainty
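Hash-based retrieval as described above reduces video search to Hamming-distance comparisons over compact binary codes. A toy sketch with random stand-in hashes (not the paper's learned, uncertainty-aware codes) shows why this is fast enough for in-room use, since each comparison is a bitwise operation over a short array:

```python
import numpy as np

def hamming_search(query: np.ndarray, database: np.ndarray, k: int = 3):
    """Return indices of the k database hashes closest to the query in
    Hamming distance (number of differing bits), plus all distances."""
    dists = (database != query).sum(axis=1)
    return np.argsort(dists, kind="stable")[:k], dists

rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(100, 64), dtype=np.uint8)  # 100 64-bit hashes
query = db[42].copy()
query[:3] ^= 1  # flip 3 bits to simulate per-frame fluctuation
top, dists = hamming_search(query, db)
```

Even with a few corrupted bits, the true entry remains the nearest neighbour; the paper's contribution is, in effect, knowing which bits are most likely to be the corrupted ones.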
12.
Pancreatology ; 23(5): 543-549, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37236853

ABSTRACT

BACKGROUND/OBJECTIVES: Insulinomas are rare, functioning pancreatic neuroendocrine neoplasms (pNEN), whose gold standard therapy is surgical resection. Endoscopic ultrasound-guided radiofrequency ablation (EUS-RFA) is a recent technique that has emerged as a minimally invasive therapeutic option for patients with pancreatic lesions not eligible for surgery. In this study, we aimed to describe a series of patients with unresectable pancreatic insulinoma treated with EUS-RFA. METHODS: This is a single-center, retrospective study including all consecutive patients with functioning pancreatic insulinoma undergoing EUS-RFA for surgical unfitness or surgery refusal between March 2017 and September 2021. Technical success (i.e., complete mass ablation), adverse event rate and severity, clinical and radiologic outcomes (i.e., symptom remission with a normal concentration of blood glucose, and the presence of intralesional necrosis), and post-procedural follow-up were assessed. RESULTS: A total of 10 patients (mean age: 67.1 ± 10.1 years; F:M 7:3) were included. The mean size of insulinoma was 11.9 ± 3.3 mm. Technical success and clinical remission were achieved in 100% of patients. Only one patient (10%) required two RFA sessions for successful treatment. Two procedure-related early adverse events occurred, namely two (20%) cases of mild abdominal pain. No major complications were observed. A complete radiologic response within 3 months after EUS-RFA was observed in all patients (100%). After a median follow-up of 19.5 (range 12-59) months, symptom remission and persistent euglycemia were observed in all patients. CONCLUSIONS: Data from this case series suggest that EUS-RFA is a feasible and safe therapeutic approach, with medium-term efficacy, for pancreatic insulinomas in patients unwilling or unable to undergo surgery.


Subjects
Insulinoma, Pancreatic Neoplasms, Radiofrequency Ablation, Humans, Middle Aged, Aged, Insulinoma/diagnostic imaging, Insulinoma/surgery, Insulinoma/pathology, Retrospective Studies, Pancreatic Neoplasms/diagnostic imaging, Pancreatic Neoplasms/surgery, Pancreatic Neoplasms/pathology, Radiofrequency Ablation/methods, Endosonography/methods, Interventional Ultrasonography
13.
IEEE Trans Med Imaging ; 42(9): 2592-2602, 2023 09.
Article in English | MEDLINE | ID: mdl-37030859

ABSTRACT

Automatic recognition of fine-grained surgical activities, called steps, is a challenging but crucial task for intelligent intra-operative computer assistance. The development of current vision-based activity recognition methods relies heavily on a high volume of manually annotated data. This data is difficult and time-consuming to generate and requires domain-specific knowledge. In this work, we propose to use coarser and easier-to-annotate activity labels, namely phases, as weak supervision to learn step recognition with fewer step annotated videos. We introduce a step-phase dependency loss to exploit the weak supervision signal. We then employ a Single-Stage Temporal Convolutional Network (SS-TCN) with a ResNet-50 backbone, trained in an end-to-end fashion from weakly annotated videos, for temporal activity segmentation and recognition. We extensively evaluate and show the effectiveness of the proposed method on a large video dataset consisting of 40 laparoscopic gastric bypass procedures and the public benchmark CATARACTS containing 50 cataract surgeries.


Subjects
Computer Neural Networks, Computer-Assisted Surgery
14.
Int J Comput Assist Radiol Surg ; 18(9): 1665-1672, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36944845

ABSTRACT

PURPOSE: Automatic recognition of surgical activities from intraoperative surgical videos is crucial for developing intelligent support systems for computer-assisted interventions. Current state-of-the-art recognition methods are based on deep learning, where data augmentation has shown the potential to improve the generalization of these methods. This has spurred work on automated and simplified augmentation strategies for image classification and object detection on datasets of still images. Extending such augmentation methods to videos is not straightforward, as the temporal dimension needs to be considered. Furthermore, surgical videos pose additional challenges as they are composed of multiple, interconnected, and long-duration activities. METHODS: This work proposes a new simplified augmentation method, called TRandAugment, specifically designed for long surgical videos, that treats each video as an assembly of temporal segments and applies consistent but random transformations to each segment. The proposed augmentation method is used to train an end-to-end spatiotemporal model consisting of a CNN (ResNet50) followed by a TCN. RESULTS: The effectiveness of the proposed method is demonstrated on two surgical video datasets, namely Bypass40 and CATARACTS, and two tasks, surgical phase and step recognition. TRandAugment adds a performance boost of 1-6% over previous state-of-the-art methods, which use manually designed augmentations. CONCLUSION: This work presents a simplified and automated augmentation method for long surgical videos. The proposed method has been validated on different datasets and tasks, indicating the importance of devising temporal augmentation methods for long surgical videos.


Subjects
Cataract Extraction, Computer Neural Networks, Humans, Algorithms, Cataract Extraction/methods
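The core idea of TRandAugment, applying one randomly chosen transform consistently within each temporal segment, can be sketched as follows. The integer "frames" and toy transforms here are hypothetical stand-ins for real video frames and image augmentations:

```python
import random

def segment_augment(frames, n_segments, transforms, seed=None):
    """Split a video into temporal segments and apply one randomly chosen
    transform consistently to every frame within each segment."""
    rng = random.Random(seed)
    seg_len = max(1, len(frames) // n_segments)
    out = []
    for start in range(0, len(frames), seg_len):
        t = rng.choice(transforms)  # one transform per segment
        out.extend(t(f) for f in frames[start:start + seg_len])
    return out

# Hypothetical "frames" as integers; transforms as simple callables.
frames = list(range(12))
transforms = [lambda x: x + 100, lambda x: -x]
aug = segment_augment(frames, n_segments=3, transforms=transforms, seed=7)
```

Keeping the transform fixed within a segment preserves temporal coherence, which is the property that distinguishes video augmentation from applying independent per-frame transforms.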
15.
Therap Adv Gastroenterol ; 16: 17562848231155984, 2023.
Article in English | MEDLINE | ID: mdl-36895283

ABSTRACT

Endoscopic retrograde cholangiopancreatography (ERCP) is an advanced endoscopic procedure that might lead to severe adverse events. Post-ERCP pancreatitis (PEP) is the most common post-procedural complication, which is related to significant mortality and increasing healthcare costs. Up to now, the prevalent approach to preventing PEP has consisted of employing pharmacological and technical expedients that have been shown to improve post-ERCP outcomes, such as the administration of rectal nonsteroidal anti-inflammatory drugs, aggressive intravenous hydration, and the placement of a pancreatic stent. However, it has been reported that PEP originates from a more complex interaction of procedural and patient-related factors. Appropriate ERCP training has a pivotal role in PEP prevention strategy, and it is no coincidence that a low PEP rate is universally considered one of the most relevant indicators of proficiency in ERCP. Scant data on the acquisition of skills during ERCP training are currently available, although some efforts have recently been made to shorten the learning curve by way of simulation-based training and to demonstrate competency by meeting technical requirements as well as adopting skill evaluation scales. Besides, the identification of adequate indications for ERCP and accurate pre-procedural risk stratification of patients might help to reduce PEP occurrence regardless of the endoscopist's technical abilities, and generally preserve safety in ERCP. This review aims at delineating current preventive strategies and highlighting novel perspectives for a safer ERCP, focusing on the prevention of PEP.

16.
Updates Surg ; 75(3): 627-634, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36899291

ABSTRACT

Perirectal hematoma (PH) is one of the most feared complications of stapling procedures. Literature reviews have reported only a few works on PH, most of them describing isolated treatment approaches and severe outcomes. The aim of this study was to analyze a homogenous case series of PH and to define a treatment algorithm for large postoperative PHs. A retrospective analysis of a prospective database of three high-volume proctology units was performed between 2008 and 2018, and all PH cases were analyzed. In all, 3058 patients underwent stapling procedures for hemorrhoidal disease or obstructed defecation syndrome with internal prolapse. Among these, 14 (0.46%) large PH cases were reported; 12 of these hematomas were stable and treated conservatively (antibiotics and CT/laboratory test monitoring), and most resolved with spontaneous drainage. Two patients with progressive PH (signs of active bleeding and peritonism) underwent CT and arteriography to evaluate the source of bleeding, which was subsequently closed by embolization. This approach helped ensure that no patients with PH were referred for major abdominal surgery. Most PH cases are stable and treatable with a conservative approach, evolving with self-drainage. Progressive hematomas are rare and should be managed with angiography and embolization to minimize the likelihood of major surgery and severe complications.


Subjects
Hemorrhoids, Humans, Hemorrhoids/surgery, Defecation, Retrospective Studies, Surgical Stapling/adverse effects, Surgical Stapling/methods, Prolapse, Hematoma/etiology, Hematoma/therapy, Treatment Outcome, Postoperative Complications/therapy, Postoperative Complications/surgery
18.
Surg Endosc ; 37(6): 4321-4327, 2023 06.
Article in English | MEDLINE | ID: mdl-36729231

ABSTRACT

BACKGROUND: Surgical video recording provides the opportunity to acquire intraoperative data that can subsequently be used for a variety of quality improvement, research, and educational applications. Various recording devices are available for standard operating room camera systems. Some allow for collateral data acquisition including activities of the OR staff, kinematic measurements (motion of surgical instruments), and recording of the endoscopic video streams. Additional analysis through computer vision (CV), which allows software to understand and perform predictive tasks on images, can allow for automatic phase segmentation, instrument tracking, and derivative performance-geared metrics. With this survey, we summarize available surgical video acquisition technologies and associated performance analysis platforms. METHODS: In an effort promoted by the SAGES Artificial Intelligence Task Force, we surveyed the available video recording technology companies. Of thirteen companies approached, nine were interviewed, each over an hour-long video conference. A standard set of 17 questions was administered. Questions spanned data acquisition capacity, quality, and synchronization of video with other data; availability of analytic tools; privacy; and access. RESULTS: Most platforms (89%) store video in full-HD (1080p) resolution at a frame rate of 30 fps. Most (67%) of the available platforms store data in a Cloud-based databank as opposed to institutional hard drives. CV-powered analysis is featured in some platforms: phase segmentation in 44% of platforms, out-of-body blurring or tool tracking in 33%, and suture time in 11%. Kinematic data are provided by 22%, and perfusion imaging by one device. CONCLUSION: Video acquisition platforms on the market allow for in-depth performance analysis through manual and automated review. Most of these devices will be integrated in upcoming robotic surgical platforms.
Platform analytic supplementation, including CV, may allow for more refined performance analysis for surgeons and trainees. Most current AI features are related to phase segmentation, instrument tracking, and video blurring.


Subjects
Artificial Intelligence, Robotic Surgical Procedures, Humans, Endoscopy, Software, Privacy, Video Recording
19.
IEEE Trans Med Imaging ; 42(7): 1920-1931, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36374877

ABSTRACT

Recent advancements in deep learning methods bring computer assistance a step closer to fulfilling the promise of safer surgical procedures. However, the generalizability of such methods is often dependent on training on diverse datasets from multiple medical institutions, which is a restrictive requirement considering the sensitive nature of medical data. Recently proposed collaborative learning methods such as Federated Learning (FL) allow for training on remote datasets without the need to explicitly share data. Even so, data annotation still represents a bottleneck, particularly in medicine and surgery where clinical expertise is often required. With these constraints in mind, we propose FedCy, a federated semi-supervised learning (FSSL) method that combines FL and self-supervised learning to exploit a decentralized dataset of both labeled and unlabeled videos, thereby improving performance on the task of surgical phase recognition. By leveraging temporal patterns in the labeled data, FedCy helps guide unsupervised training on unlabeled data towards learning task-specific features for phase recognition. We demonstrate significant performance gains over state-of-the-art FSSL methods on the task of automatic recognition of surgical phases using a newly collected multi-institutional dataset of laparoscopic cholecystectomy videos. Furthermore, we demonstrate that our approach also learns more generalizable features when tested on data from an unseen domain.


Subjects
Supervised Machine Learning, Operative Surgical Procedures, Video Recording
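Federated learning methods such as the one described above aggregate client model updates without sharing the underlying data. A minimal sketch of the standard federated-averaging step follows, with hypothetical institutions and scalar-array "models"; FedCy's actual aggregation and self-supervised components are more involved:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine client model weights, weighted by
    each client's dataset size, into a single global model."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Hypothetical: 3 institutions with different amounts of local video data.
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 20, 70]
global_w = fed_avg(weights, sizes)  # dominated by the largest institution
```

Only the weight arrays cross institutional boundaries, which is what makes the approach compatible with the data-sharing constraints the abstract describes.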
20.
Endosc Int Open ; 10(11): E1474-E1480, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36397868

ABSTRACT

Background and study aims Artificial intelligence (AI) is set to impact several fields within gastroenterology. In gastrointestinal endoscopy, AI-based tools have translated into clinical practice faster than expected. We aimed to evaluate the status of research for AI in gastroenterology while predicting its future applications. Methods All studies registered on Clinicaltrials.gov up to November 2021 were analyzed. Included studies used AI in gastrointestinal endoscopy, inflammatory bowel disease (IBD), hepatology, and pancreatobiliary diseases. Data regarding the study field, methodology, endpoints, and publication status were retrieved, pooled, and analyzed to observe underlying temporal and geographical trends. Results Of the 103 study entries retrieved according to our inclusion/exclusion criteria, 76 (74 %) were based on AI application to gastrointestinal endoscopy, mainly for detection and characterization of colorectal neoplasia (52/103, 50 %). Image analysis was also more frequently reported than data analysis for pancreaticobiliary diseases (six of 10 [60 %]), liver diseases (eight of nine [89 %]), and IBD (six of eight [75 %]). Overall, 48 of 103 study entries (47 %) were interventional and 55 (53 %) observational. In 2018, one of eight studies (12.5 %) was interventional, while in 2021, 21 of 34 (61.8 %) were interventional, with an inverse ratio between observational and interventional studies over the study period. The majority of the studies were planned as single-center (74 of 103 [72 %]), and most were conducted in Asia (45 of 103 [44 %]) or Europe (44 of 103 [43 %]). Conclusions AI implementation in gastroenterology is dominated by computer-aided detection and characterization of colorectal neoplasia. The timeframe for translational research is characterized by a swift conversion of observational into interventional studies.
